
Five Ways to Leverage AI Safely and Responsibly

Artificial Intelligence (AI) is supercharging customer service, amplifying personalized product recommendations, and accelerating workflows so that humans can focus on higher-value tasks. However, AI cannot deliver the desired productivity improvements to financial organizations without foundational security protections in place.

In this blog, I recap several best practices that empower financial institutions to leverage AI safely and responsibly. My perspectives were originally shared in a session at Egnyte’s Financial Services Summit, and you’ll find a link to the recorded session at the end of this article.  

Walk the fine line between AI adoption and risk management.

One of the most significant challenges with AI is that it can mean different things to different people. For this blog, we’ll use the definition of AI in this April 3, 2024, McKinsey & Company article: “Artificial Intelligence is a machine’s ability to perform cognitive functions we usually associate with human minds.” That definition encompasses capabilities previously referred to as “machine learning.”

To deploy AI responsibly, you’ll need to walk the fine line between AI adoption and risk management by:

  • Creating a company-wide responsible AI usage policy that’s shared with all of your users, including provisions for coaching when inevitable policy violations occur.
  • Encouraging your team to think beyond AI's technical capabilities and focus on strategic benefits, such as threat detection, compliance automation, and error resolution.
  • Partnering with your business lines to understand how they currently use AI, so you can make informed recommendations about how to do so more safely.  

Encourage responsible AI usage by your users. 

Once you’ve established an AI usage policy, you’ll need to continually reinforce the importance of responsible usage with your users, and you should expect to update the policy over time. You can find the AI Institute’s sample policy template here.

To maximize responsible AI usage at your organization: 

  • Align your AI usage policy with your organization’s culture and risk appetite.
  • Set usage expectations from the top of the organization by specifically outlining which AI solutions your organization permits and how they should be used.
  • Be explicit in your policy about what actions your users should and shouldn’t take.
  • Consider engaging a third-party expert or performing online research for fresh perspectives on best practices.
  • Be sure to model positive AI behavior in interactions with your user base. In other words, AI guidelines should apply to everyone at the company.   

Involve your executive team from the start. 

As mentioned in the previous section, it’s important to establish AI usage policies at the organizational level, and that requires executive buy-in from day one.

To involve your executive team most effectively, you need to: 

  • Build a formal business case for AI implementation. TechTarget provides a sample AI business case template that you can use as a resource.
  • Reinforce the idea that AI is not meant to replace employees; rather, it’s meant to make current employees more productive.
  • Educate executives about productivity improvements resulting from AI adoption and assuage their technological fears by offering simple use-case demonstrations.
  • Keep communications business-driven and in plain English, with a minimum of acronyms and technical terminology.  

View AI adoption as a “program” rather than as a “project.” 

As I mentioned at the FSI Summit, AI products aren’t as new as we might think. For example, did you know that the Dartmouth Summer Research Project on Artificial Intelligence established the field of AI research in 1956? AI is here to stay, so you need to view it as an ongoing IT program, not a temporary IT project.

Here’s how to do so: 

  • Establish a working committee focused on AI implementation, adoption, and ongoing maintenance. Include representatives from across your company.
  • Determine what organizations your company can partner with to further AI education and keep your AI solutions as productivity-enhancing as possible.
  • For maximum success, consider implementing AI as a Proof of Concept (POC) in a test environment or a small region, then implement AI solutions more comprehensively with time.
  • Involve your user base throughout the process and encourage users to provide feedback for improvement.  

Assume threat actors are actively using AI.  

Here are just a few ways we know threat actors are leveraging AI for nefarious purposes: 

  • Nation-states are leveraging AI-based technology for reconnaissance and to conduct cyberattacks.
  • Generative AI has made phishing emails far more professional in presentation and far more believable in their text, turning these messages into effective gateways for ransomware attacks.
  • Threat actors are taking advantage of sensitive information that users inadvertently enter into Large Language Models (LLMs). This UK National Cyber Security Centre article details the risk of entering sensitive data into LLMs, and a minimal mitigation sketch follows this list.
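
To make that last point concrete, here is a minimal sketch of a pre-submission redaction filter in Python. The `redact_prompt` helper and the regex patterns are hypothetical illustrations of the general technique, not an Egnyte feature or a substitute for vetted data loss prevention (DLP) tooling:

```python
import re

# Hypothetical patterns for common sensitive data in financial contexts.
# A real deployment would rely on vetted DLP tooling, not hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}


def redact_prompt(prompt: str) -> str:
    """Replace sensitive matches with labeled placeholders before the
    prompt is sent to any external LLM."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Customer 123-45-6789 (jane.doe@example.com) disputed a charge."
    print(redact_prompt(raw))
    # Customer [REDACTED SSN] ([REDACTED EMAIL]) disputed a charge.
```

Pairing a lightweight technical control like this with the usage policy described earlier reduces the chance that sensitive customer data ever leaves your environment.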

Big picture, your goal should be to leverage AI in a way that keeps you a step or more ahead of potential threat actors by using AI’s power to detect threats and auto-remediate security issues.

Learn More  

To learn more about balancing AI adoption with cybersecurity, watch and share the replay of my FSI Summit session with Neil Jones from Egnyte, “Cybersecurity in the Age of AI: Cutting Through the Hype to Proactively Reduce Risk.”  
